Context

National Context

The Home and Community-Based Services (HCBS) Final Rule1 was developed to ensure that individuals receiving long-term services and supports through Medicaid HCBS programs2 have full access to the benefits of community living and the opportunity to receive services in the most integrated setting appropriate.

Implementation of this final rule is part of the practical enactment of the Supreme Court’s 1999 ruling in Olmstead v. L.C., which requires that states provide services to persons with disabilities in community settings rather than institutions, if certain conditions are met. To define and provide oversight regarding the features of community settings, the final rule establishes:

  • Mandatory requirements for qualities of HCBS settings
  • Specific types of settings that are not home and community-based
  • Specific types of settings that are presumed not to be home and community-based

These are provided to support a measurable definition that focuses on the nature and quality of individuals’ experiences.

Michigan Implementation

The Michigan Department of Health and Human Services (MDHHS) must transition its Habilitation Supports Waiver (HSW) and B3 Waiver to be consistent with the Statewide Transition Plan. MDHHS’ Behavioral Health and Developmental Disabilities Administration (BHDDA) contracted with the Michigan Developmental Disabilities Institute (MI-DDI) at Wayne State University to assess compliance with the HCBS ruling among the full population of HSW recipients and their residential and non-residential service providers; the contract also covered instrument design, utility, and study methodology. The study had the following objectives, per the MI-DDI Executive Summary document:3

  1. To determine HSW beneficiary perceptions of their residential and non-residential providers’ compliance with the HCBS ruling
  2. To determine HSW residential and non-residential provider perceptions of their compliance with the HCBS ruling; and
  3. To compare and contrast beneficiary and provider responses.

Since these surveys measure the perceptions of providers and beneficiaries, and because the HCBS Final Rule places high importance on the nature and quality of individuals’ experience of the setting in which they live and work, BHDDA is interested not merely in provider responses to the survey questions but in whether beneficiaries’ experience of a provider’s services is consistent with the perceptions of the provider. Because the survey is one of the few ways beneficiaries may have to indicate disagreement with an employer or home manager, it is important that their responses be taken seriously and used to augment the interpretation of the survey findings.

This is an interactive report, and can thus be investigated in ways that static paper reports cannot. For a list of available functionality, please see the appendix.

Notes on Data

For this analysis, we use data that have been cleaned and processed by a separate script and stored locally. The entire analysis, including the data cleaning process, follows principles of reproducible research for accountability, durability, and transparency.

The following steps were applied to the original output from Qualtrics:

  • In order to align with the MI-DDI reporting methodology, we subset the survey data to only include complete surveys, where a survey “is considered complete when the beneficiary and the associated provider surveys were received. Complete surveys were included in the analysis if the beneficiary and provider surveys were received and matched (i.e., both beneficiary and provider responded to questions regarding the same service and service provider).”
  • In this report, participant responses without matched provider responses are excluded, because without a provider response there is nothing to evaluate agreement against.
  • In instances where a beneficiary provides an answer to a specific question and the provider does not, these questions are removed from the dataset, and are thus not present in the denominator of reported mismatch rates.
  • Providers who did not answer any questions that are crosswalked to participant surveys are removed. These are residential providers who exited the survey because they indicated that the setting was a private residence. They are not included in this report or in any of the compliance-related reports in WSA.
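The pairing and exclusion steps above can be sketched with a toy example. The data frames and column names below are hypothetical, not the actual Qualtrics schema; an inner join on survey and question drops unmatched surveys and provider-unanswered questions in one step.

```python
import pandas as pd

# Hypothetical question-level responses; survey 3 has no provider match.
beneficiary = pd.DataFrame({
    "survey_id": [1, 1, 2, 3],
    "question":  ["Q34", "Q40", "Q34", "Q34"],
    "answer":    ["Yes", "No", "Yes", "No"],
})
provider = pd.DataFrame({
    "survey_id": [1, 1, 2],
    "question":  ["Q34", "Q40", "Q34"],
    "answer":    ["Yes", "Yes", "Yes"],
})

# Keep only question responses present in BOTH surveys (inner join),
# mirroring the exclusion of unmatched surveys and unanswered questions.
paired = beneficiary.merge(
    provider, on=["survey_id", "question"], suffixes=("_ben", "_prov")
)
print(len(paired))  # survey 3 drops out entirely -> 3 paired responses
```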

The initial participant dataset includes 5395 participant surveys. In the dataset of paired participant/provider responses, there are 4813 completed participant surveys, 3679 completed residential provider surveys, and 2438 completed non-residential provider surveys.4 Each participant survey is connected to (a) a residential provider survey, (b) one or more non-residential provider surveys, or (c) provider surveys of both types.

Summary of Surveys

By Provider Organization

The table below shows the number of paired beneficiary and provider responses for each organization5. It allows viewers to filter, sort and understand the relative size and representation of different providers in the survey data.

By Provider Setting

The stacked bar chart below shows the specific provider settings that comprise each of the provider organizations shown in the table above, allowing specific sites to be viewed for context. Because 810 provider organizations are identified in the data, many are displayed. You can click and drag on the chart below to zoom in and investigate specific organizations:

Summary of Mismatches

While providers’ responses serve as the primary source for identifying conformance with the HCBS final rule, beneficiaries were also surveyed, and their responses can be compared with those of providers to obtain a measure of concordance: whether the lived experience of individuals in provider settings is similar to what has been reported by those managing such settings.

In the analyses which follow, we explore the mismatch rate for HCBS participants and their provider organizations (both residential and non-residential). At the outset, it is worth defining what we mean by mismatch rate: the proportion of participant responses that were not consistent with their provider’s response to the same question. This means the denominator of the measure is not the total number of surveys but the total number of question responses from those surveys, since each question presents a new occasion for a beneficiary to either agree or disagree with their provider.
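As a minimal illustration of this definition (using made-up responses, not actual survey data), the rate is computed over question-level pairs rather than whole surveys:

```python
# Each tuple: (question, beneficiary answer, provider answer) -- illustrative only.
pairs = [
    ("Q34", "Yes", "Yes"),
    ("Q40", "No",  "Yes"),
    ("Q43", "Yes", "No"),
    ("Q34", "Yes", "Yes"),
]

# The denominator is the number of paired question responses, not surveys.
mismatches = sum(ben != prov for _, ben, prov in pairs)
rate = mismatches / len(pairs)
print(f"{rate:.0%}")  # 2 of 4 paired responses disagree -> 50%
```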

Mismatches by Provider

Overall Mismatch Rate, by Provider

The chart below compares the overall mismatch rate for all provider organizations.

The histogram below shows the distribution of mismatch rate values across all providers:

Mismatches by Provider Type (Res/Non-Res)

The bar chart below shows a summary comparison of mismatch rates by provider type, comparing Residential provider responses with Non-Residential provider responses:

The treemap6 below allows us to investigate which domains and questions have the greatest amount of disagreement between beneficiary and provider responses. In the visual below, the size of a box corresponds to the number of mismatches, while the color corresponds to the percent of responses that were mismatches. By clicking on the chart, one can investigate the layer ‘under’ the one currently visible. So, if the Provider Type of Residential has the greatest number (i.e. largest size) and percentage (i.e. darkest color), one can click on that portion of the chart to investigate the next layer within Residential and see which Domains have the greatest mismatch rate within that set of providers.

Mismatches by Domain/Question

The chart below summarizes the percentage of mismatches for specific questions, in order to investigate which items have the greatest amount of disagreement between participants and providers. Questions are color-coded by the domain in which they are grouped.

Mismatches by Question, by Provider

What are the questions where there is the greatest amount of variation in the mismatch rate across providers? Investigating these may indicate instances where:

  • There are consistently high mismatch rates across providers. These are the items regarding which there is a clearer need to investigate participant and provider perceptions.
  • There are consistently low mismatch rates across providers. These are the items regarding which there is likely the least concern of disagreement.
  • The mismatch rates across providers are more variable. Such variation may indicate that the questions were not consistently understood.

Note that in the tables below, we exclude instances where a provider had only one participant responding to a question, since including these would force a 0% or 100% mismatch rate, and thus inflate the variation.7

The boxplot below shows the distribution of mismatch rates across providers on each HCBS survey question.8 The questions are arranged from left to right in descending order of their standard deviation9, so that questions on the left show greater variation across providers, while questions on the right show less variation but more providers identified as outliers (i.e. dots).

Reading this plot, we can note a few things:

  • No single domain (i.e. color) is consistently variable
  • Several questions in the ‘Choice/Control’ domain have consistently high mismatch rates: Q34, Q40, Q43, Q119.10
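The exclusion of single-participant cells and the ordering by standard deviation can be sketched as follows. The column names, providers, and rates are hypothetical:

```python
import pandas as pd

# Hypothetical provider-by-question mismatch rates with participant counts.
df = pd.DataFrame({
    "question": ["Q34", "Q34", "Q34", "Q40", "Q40", "Q40"],
    "provider": ["A", "B", "C", "A", "B", "C"],
    "n_participants": [5, 1, 8, 4, 6, 3],
    "rate": [0.10, 1.00, 0.30, 0.20, 0.25, 0.22],
})

# Drop cells with a single participant, which force a 0% or 100% rate.
kept = df[df["n_participants"] > 1]

# Order questions by the spread (standard deviation) of provider-level rates.
spread = kept.groupby("question")["rate"].std().sort_values(ascending=False)
print(spread.index.tolist())  # Q34 varies more across providers than Q40
```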

We can see more details if we look at a summary table11 of the questions:


Outlier Identification

Methodology

The BHDDA HCBS Team considered various approaches to identifying outliers (see Appendix 2) before selecting the interquartile range (IQR) method. This approach was further refined for greater utility by grouping by size of provider and focusing on well-understood questions, as detailed below.

Grouping by Size

The plot below on the left illustrates the basic issue of response size: providers who answer a greater number of questions are unlikely to have extremely low or high mismatch rates, since such extremes are largely an artifact of small denominators. As a result, the mismatch rates of these larger providers would never be flagged as outliers, merely owing to their size.

The plot on the right shows the same data, but with providers grouped by size based on the number of participant surveys matched to them.12 The groups have initially been selected manually, and are defined below:

  • Individual: Providers with only a single (1) matched participant survey.
  • Small: Providers with 2 to 13 matched participant surveys.
  • Large: Providers with 14 or more matched participant surveys.

The outlier-identification methodology uses these groups and performs outlier detection separately within each. This mitigates the impact of providers with a small number of survey responses, as mentioned above. The resulting output will flag providers who are outliers on the upper end of the continuum, since the current analysis is not concerned with providers who have low mismatch rates.
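A minimal sketch of this grouped approach, with hypothetical providers, size groups, and rates (a pandas `groupby` applies the upper-fence IQR rule within each group):

```python
import pandas as pd

# Hypothetical provider-level data: size group and overall mismatch rate.
df = pd.DataFrame({
    "provider": list("ABCDEFGHIJ"),
    "size_group": ["Small"] * 5 + ["Large"] * 5,
    "rate": [0.05, 0.08, 0.10, 0.12, 0.45,
             0.06, 0.07, 0.09, 0.10, 0.11],
})

def flag_upper_outliers(rates):
    # Flag only the upper tail: rates above Q3 + 1.5 * IQR within the group.
    q1, q3 = rates.quantile(0.25), rates.quantile(0.75)
    return rates > q3 + 1.5 * (q3 - q1)

# Detection runs separately within each size group.
df["outlier"] = df.groupby("size_group")["rate"].transform(flag_upper_outliers)
print(df.loc[df["outlier"], "provider"].tolist())  # only provider E is flagged
```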

Focusing on Well-Understood Questions

Some questions may have a higher mismatch rate between providers and participants due to difficulty in understanding the precise language and interpretation of the questions’ phrasing. While these questions may be valuable to reconsider for purposes of survey design and clarification to the field, questions where a higher proportion of responses included “I don’t know” or “Unknown” will be removed from the mismatch rate calculation. This focuses the analysis on mismatches that were more likely to occur due to a difference in perception regarding the program itself.13

The chart below shows the proportion of participant responses including “I don’t know” or “Unknown”, per survey question:14

While some of the questions shown above are retained, those where the proportion of such responses exceeds 5% will be excluded from the mismatch rate calculation.
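This 5% cutoff can be sketched as a simple filter. The per-question counts below are hypothetical:

```python
import pandas as pd

# Hypothetical per-question counts of "I don't know" / "Unknown" responses.
responses = pd.DataFrame({
    "question": ["Q34", "Q40", "Q43"],
    "unknown":  [2, 40, 1],
    "total":    [500, 500, 500],
})

# Exclude questions where more than 5% of responses were "I don't know"/"Unknown".
responses["pct_unknown"] = responses["unknown"] / responses["total"]
retained = responses.loc[responses["pct_unknown"] <= 0.05, "question"]
print(retained.tolist())  # Q40 (8% unknown) is excluded
```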


Providers Identified as Outliers

The adjustments detailed above are applied to the data to calculate the threshold scores beyond which a given provider’s mismatch rate will be flagged as an outlier.

For reference, the thresholds for identification of outliers are summarized below for each grouping size of provider:

The bar chart below shows the number of organizations which are categorized into each size grouping, and the number of each of those groups that is tagged as an outlier using the defined method:


Reviewer Credits

This analysis has benefited from the collaboration, dedication and review of various members of the HCBS Team at MDHHS BHDDA. Special thanks to:

  • Yingxu Zhang, for her dedicated review, creative questions and collaboration to assure data quality and solid methodology in the analysis of this dataset
  • Briana Asselin, for her attention to policy details and patient review to improve the presentation and framing of the issues in this report
  • Millie Shepherd, for fitting an attention to participant responses into the overall implementation strategy for HCBS Final Rule compliance
  • Belinda Hawks, for leadership and forward-looking strategy which balances compliance and innovation, informed by a thoroughly person-centered approach

Reference Material

Appendix 1: About this Document

Users of this document can interact with any of the data visualizations provided in this supplement. Below is a list of the primary ways that each of the data displays can be used:

Table of Contents: The table of contents at the left side of the document can be used to navigate throughout the document. When the user selects a section of the table of contents, the subsections will expand to allow for detailed selection of the portion of document that a user wishes to investigate.

Tables: Tables of data can be sorted based on the values in any of their columns. When a table contains many values, it can be expanded or truncated to show a certain number of rows at a time. In some cases, tables can be filtered based on the values in their columns.

Charts: Charts can be interacted with in any of the following ways:

  • Filtering out specific categories by selecting/deselecting those items in the chart legend
  • Hovering over a chart element to see a description of the data represented
  • Highlighting a certain portion of a chart to zoom in
  • Saving a screenshot of a given chart

Appendix 2: Options for Outlier Identification

The HCBS team at MDHHS wanted to look at the possibility of establishing a baseline mismatch rate at the provider level, which would provide a method to tag providers who had unusually high mismatch rates.

Interquartile Range (IQR) Method

Below we see which providers would be identified as outliers using the interquartile range (IQR) method. This method assumes that if a particular provider’s mismatch rate is above Q3 + 1.5×IQR, it is far enough from the central values to warrant review. This method follows the same logic that boxplots use to flag observations as outliers.

In the chart below, the horizontal line across the center of the bar chart shows the median mismatch rate across all providers, the interior shaded regions show the values which fall within the first and third quartiles, and the exterior shaded regions show the values which fall within the IQR outlier range. Only the values which rise up above the upper boundary of the shaded area would be considered outliers.

While not ideal for larger or heavily skewed datasets, this method strikes a balance between explainability and accuracy.
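A minimal sketch of the rule, on illustrative rates (the same upper-fence logic a boxplot uses):

```python
import numpy as np

# Illustrative provider mismatch rates, with two high values.
rates = np.array([0.05, 0.06, 0.07, 0.08, 0.09, 0.10, 0.40, 0.45])

q1, q3 = np.percentile(rates, [25, 75])
threshold = q3 + 1.5 * (q3 - q1)  # upper IQR fence
print(rates[rates > threshold])   # both high rates exceed the fence
```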

Two Standard Deviations Method

The horizontal line across the center of the bar chart shows the average mismatch rate across all providers, and the shaded regions show the values which fall within one and two standard deviation from the mean, respectively. In the chart below, only the values which rise up above the upper boundary of the shaded area would be considered outliers.

Because outliers inflate the standard deviation, any rule based solely on the standard deviation may perform poorly: the threshold grows until it includes the very outlier values it is meant to flag.
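Applying the two-standard-deviation rule to the same kind of illustrative data shows the problem: the extreme values inflate the standard deviation until no provider exceeds the threshold.

```python
import numpy as np

# Illustrative provider mismatch rates, with two high values.
rates = np.array([0.05, 0.06, 0.07, 0.08, 0.09, 0.10, 0.40, 0.45])

mean = rates.mean()
sd = rates.std(ddof=1)  # sample SD, inflated by the two high rates
flagged = rates[rates > mean + 2 * sd]
print(flagged)  # empty: the inflated threshold swallows both high values
```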

Regression Method

Since there are so many different providers, and some of them completed surveys for multiple participants while others completed surveys for only one participant, it is hard to tell how much weight to give a particular bar in the chart above. In the scatterplot below, it is easier to differentiate providers based on the number of responses/mismatches, and a darker color equates to a higher percentage of mismatches. The regression line in the plot shows what number of mismatches might be expected for a provider with a given number of responses. Providers who fall below the line have fewer mismatches than expected, while those above the line have more than expected.
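A sketch of this approach with made-up per-provider counts, using an ordinary least-squares fit via `numpy.polyfit`; the sign of the residual indicates more or fewer mismatches than the line predicts:

```python
import numpy as np

# Hypothetical per-provider counts: total paired responses and mismatches.
responses  = np.array([10, 50, 100, 200, 400])
mismatches = np.array([ 2,  9,  18,  35,  80])

# Least-squares line: expected mismatches as a function of response count.
slope, intercept = np.polyfit(responses, mismatches, 1)
expected = slope * responses + intercept
residuals = mismatches - expected

# Indices of providers above the line (more mismatches than expected).
print(np.where(residuals > 0)[0])
```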

Without additional refinement, this method does not yet exclude enough observations to be operationally useful. In addition, more advanced techniques have the tradeoff of poor interpretability for users and thus a steeper adoption curve.


Endnotes


  1. Published as Medicaid Program; State Plan Home and Community-Based Services, 5-Year Period for Waivers, Provider Payment Reassignment, and Home and Community-Based Setting Requirements for Community First Choice (Section 1915(k) of the Act) and Home and Community-Based Services (HCBS) Waivers (Section 1915(c) of the Act) in the Federal Register. Link here: https://www.federalregister.gov/d/2014-00487

  2. Under the 1915(c), 1915(i) and 1915(k) Medicaid authorities

  3. Further information on survey design and administration is available within the document linked here.

  4. The initial MI-DDI summary of HSW responses reported that “four thousand two hundred and sixty-seven participant/beneficiary surveys (n= 4,267) were received, 3,207 residential provider surveys were received, and 2,315 non-residential provider surveys were completed.” Readers should note that manual review and flagging of data was done by MI-DDI, resulting in differences in counts which cannot entirely be reconciled.

  5. The MI-DDI summary relates the following details regarding the process of fielding the survey: “Each selected beneficiary’s supports coordinator was emailed a survey invitation and was asked to assist and/or interview the beneficiary to complete the survey. In addition to the beneficiary survey, the individual’s associated residential and/or non-residential providers were invited to complete a provider survey.”

  6. Treemaps display hierarchical data (a.k.a. trees) as a set of nested rectangles. Here, the hierarchy is: Provider Type > Domain > Field. Each branch of the hierarchy is given a rectangle, which is then tiled with smaller rectangles representing sub-branches. When you are zoomed in to a particular level, a rectangle has an area proportional to the total number of mismatches. One advantage of treemaps is that they make efficient use of space and can legibly display thousands of items on the screen simultaneously.

  7. That said, it is worth noting that mismatch rates for questions where providers had a low number of participant responses are more likely to be either high or low.

  8. Each data point summarized here corresponds to the mismatch rate of a given provider for a specific question. The number shown on the left side of each boxplot corresponds to the minimum mismatch rate, while the number shown on the right side corresponds to the maximum mismatch rate. If a value is depicted as a dot rather than a whisker, it is a suspected outlier based on the IQR method. The line in the middle of each box corresponds to the median mismatch rate across all organizations responding to the survey, meaning that half of providers had a mismatch rate above this value and half below it. For more information on how to read a boxplot, please reference this link identifying the parts of the diagram and their interpretation.

  9. A measure of the variation or dispersion of a set of values from their mean.

  10. These are each related to the ability to choose certain aspects of one’s life (e.g. roommates), and the high rate of mismatches may point to a disconnect between people being allowed to make decisions as opposed to being empowered to make decisions; a distinction not easily gleaned from survey phrasing.

  11. Tables of data in this report can be sorted based on the values in any of their columns. When a table contains many values, it can be expanded or truncated to show a certain number of rows at a time.

  12. Note that the number of responses per survey may be different based on the number of questions answered per survey.

  13. It is worth noting here that just because questions have a high mismatch rate but lower understandability, this does not necessarily mean that the mismatch does not arise from a fundamental difference of understanding between participants and providers.

  14. Questions without any “I don’t know” or “Unknown” responses are not displayed here.